Most pregnancies and births result in good outcomes, but complications are not uncommon, and when they occur they can have serious consequences for mothers and babies. Predictive modeling has the potential to improve outcomes, and thereby help obstetricians deliver better care, through a better understanding of risk factors, heightened surveillance, and more timely and appropriate interventions. For three types of complications, we use Explainable Boosting Machines (EBMs), a glass-box model, to identify and study the most important risk factors while retaining intelligibility: (i) severe maternal morbidity (SMM), (ii) shoulder dystocia, and (iii) preterm preeclampsia. Our experiments show that EBMs match the accuracy of other black-box ML methods such as deep neural networks and random forests, while the interpretability of EBMs reveals surprising insights into the features that contribute to risk.
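To make the glass-box approach above concrete, here is a minimal sketch using the open-source interpret package's Explainable Boosting Machine; the synthetic data and default settings are placeholders standing in for the clinical features and tuning used in the study.

```python
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Placeholder data standing in for clinical risk features.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An EBM is a glass-box GAM trained with boosting: accuracy is often
# competitive with black-box models while every feature's contribution
# remains inspectable.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

print("Held-out accuracy:", ebm.score(X_test, y_test))
show(ebm.explain_global())  # per-feature risk curves, i.e. the risk factors
```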
Machine learning (ML) interpretability techniques can reveal undesirable patterns in the data that models exploit to make predictions, which can cause harm once the models are deployed. However, how to act on these patterns is not always clear. In a collaboration between ML and human-computer interaction researchers, physicians, and data scientists, we developed GAM Changer, the first interactive system to help domain experts and data scientists easily and responsibly edit Generalized Additive Models (GAMs) and fix problematic patterns. With novel interaction techniques, our tool puts interpretability into action, enabling users to analyze, validate, and align model behaviors with their knowledge and values. Physicians have started to use our tool to investigate and fix pneumonia and sepsis risk prediction models, and an evaluation with seven data scientists working in different domains highlights that our tool is easy to use, meets their model-editing needs, and fits into their current workflows. Built with modern web technologies, our tool runs locally in users' web browsers or computational notebooks, lowering the barrier to use. GAM Changer is available at the following public demo link: https://interpret.ml/gam-changer.
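To make the notion of "editing a GAM" concrete, the following is a small self-contained sketch (a hypothetical toy model, not GAM Changer's own code) of the kind of targeted shape-function edit the tool supports interactively, where a user overrides a learned pattern that contradicts domain knowledge.

```python
import numpy as np

class ToyGAM:
    """A toy additive model: score(x) = intercept + sum_j f_j(x_j),
    with each shape function f_j stored as a step function over bins."""

    def __init__(self, intercept, bin_edges, bin_scores):
        self.intercept = intercept
        self.bin_edges = bin_edges    # list of 1-D threshold arrays, one per feature
        self.bin_scores = bin_scores  # list of arrays, len = len(edges) + 1

    def predict(self, X):
        out = np.full(len(X), self.intercept, dtype=float)
        for j, (edges, scores) in enumerate(zip(self.bin_edges, self.bin_scores)):
            out += scores[np.digitize(X[:, j], edges)]
        return out

    def edit_shape(self, feature, lo, hi, new_value):
        """Overwrite f_feature on bins whose left edge lies in [lo, hi)."""
        edges = self.bin_edges[feature]
        left = np.concatenate(([-np.inf], edges))   # left edge of every bin
        mask = (left >= lo) & (left < hi)
        self.bin_scores[feature][mask] = new_value

# Example: an age shape function whose last bin implausibly lowers risk.
gam = ToyGAM(
    intercept=-2.0,
    bin_edges=[np.array([30.0, 60.0, 90.0])],
    bin_scores=[np.array([0.0, 0.3, 0.8, -0.5])],  # the -0.5 looks like a data artifact
)
gam.edit_shape(feature=0, lo=90.0, hi=np.inf, new_value=0.8)  # keep risk monotone
print(gam.predict(np.array([[95.0]])))              # -1.2 after the edit
```

GAM Changer applies this kind of edit to a fitted model through its GUI and shows the effect on validation data so users can check an edit before committing it.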
Recent advances in interpretable machine learning (ML) research show that models can exploit undesirable patterns in the data to make predictions, which may cause harm once deployed. However, it is unclear how to fix these models. We present our ongoing work on GAM Changer, an open-source interactive system that helps data scientists and domain experts easily and responsibly edit their Generalized Additive Models (GAMs). With novel visualization techniques, our tool puts interpretability into action, enabling human users to analyze, validate, and align model behaviors with their knowledge and values. Built with modern web technologies, our tool runs locally in users' computational notebooks or web browsers without requiring additional compute resources, lowering the barrier to creating more responsible ML models. GAM Changer is available at https://interpret.ml/gam-changer.
Although reinforcement learning (RL) has achieved great success in many domains, applying RL to real-world settings such as healthcare is challenging when the reward is hard to specify and exploration is not allowed. In this work, we focus on recovering clinicians' rewards in treating patients. We incorporate rationales that explain clinicians' treatments in terms of their potential future outcomes. We use Generalized Additive Models (GAMs), a class of accurate, interpretable models, to recover the reward. On both simulated and real-world hospital datasets, we show that our model outperforms the baselines. Finally, our model's explanations are consistent with several clinical guidelines for treating patients, whereas we find that the commonly used linear model often contradicts them.
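A minimal sketch of the general recipe, fitting an interpretable additive model as the recovered reward over state and treatment features with the interpret package's EBM regressor; the features and outcome target below are placeholders, not the paper's actual setup.

```python
import numpy as np
from interpret.glassbox import ExplainableBoostingRegressor
from interpret import show

# Placeholder data: each row is (patient state, treatment) and the target is
# an estimate of the downstream outcome the clinician is assumed to optimize.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                       # e.g. vitals, labs, dose
outcome = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=2000)

reward_model = ExplainableBoostingRegressor(random_state=0)
reward_model.fit(X, outcome)

# Because the recovered reward is additive, each feature's shape function can
# be read off and compared against clinical guidelines.
show(reward_model.explain_global())
```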
Deploying machine learning models in real-world high-stakes settings such as healthcare often depends not only on the model's accuracy but also on its fairness, robustness, and interpretability. Generalized Additive Models (GAMs) are a class of interpretable models with a long history of use in these high-stakes domains, but they lack desirable features of deep learning such as differentiability and scalability. In this work, we propose a neural GAM (NODE-GAM) and a neural GA$^2$M (NODE-GA$^2$M) that scale well and perform better than other GAMs on large datasets, while remaining interpretable compared to other ensemble and deep learning models. We show that our models find interesting patterns in the data. Finally, we show that we can improve model accuracy with self-supervised pre-training, an improvement that is not possible for non-differentiable GAMs.
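For intuition, here is a stripped-down differentiable GAM (one small network per feature, contributions summed); note that NODE-GAM itself builds its shape functions from ensembles of differentiable oblivious decision trees rather than the per-feature MLPs used in this sketch.

```python
import torch
import torch.nn as nn

class NeuralGAM(nn.Module):
    """A simplified neural GAM: score(x) = bias + sum_j f_j(x_j),
    with each f_j a small MLP, so the whole model is differentiable."""

    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.shape_fns = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_features)
        ])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):                                # x: (batch, n_features)
        contribs = [f(x[:, j:j + 1]) for j, f in enumerate(self.shape_fns)]
        return torch.cat(contribs, dim=1).sum(dim=1) + self.bias

model = NeuralGAM(n_features=10)
logits = model(torch.randn(64, 10))   # per-feature contributions stay inspectable
```

Because every piece is differentiable, the same network can be pre-trained with a self-supervised objective before fine-tuning, which is the advantage the abstract highlights over non-differentiable GAMs.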
Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this paper we empirically demonstrate that shallow feed-forward nets can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, in some cases the shallow neural nets can learn these deep functions using the same number of parameters as the original deep models. On the TIMIT phoneme recognition and CIFAR-10 image recognition tasks, shallow nets can be trained that perform similarly to complex, well-engineered, deeper convolutional architectures.
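A minimal sketch of the mimic-training recipe behind this result: the shallow student is trained to regress the deep teacher's logits rather than the hard labels, so it learns the function the deep net has already learned. The teacher, student, and data loader are assumed to exist.

```python
import torch
import torch.nn as nn

def train_mimic(teacher, student, loader, epochs=10, lr=1e-3):
    """Fit `student` to reproduce `teacher`'s pre-softmax outputs (logits)."""
    teacher.eval()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x, _ in loader:                       # ground-truth labels are ignored
            with torch.no_grad():
                target_logits = teacher(x)        # soft targets from the deep net
            loss = mse(student(x), target_logits)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```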
Existing automated techniques for software documentation typically attempt to reason between two main sources of information: code and natural language. However, this reasoning process is often complicated by the lexical gap between more abstract natural language and more structured programming languages. One potential bridge for this gap is the Graphical User Interface (GUI), as GUIs inherently encode salient information about underlying program functionality into rich, pixel-based data representations. This paper offers one of the first comprehensive empirical investigations into the connection between GUIs and functional, natural language descriptions of software. First, we collect, analyze, and open source a large dataset of functional GUI descriptions consisting of 45,998 descriptions for 10,204 screenshots from popular Android applications. The descriptions were obtained from human labelers and underwent several quality control mechanisms. To gain insight into the representational potential of GUIs, we investigate the ability of four Neural Image Captioning models to predict natural language descriptions of varying granularity when provided a screenshot as input. We evaluate these models quantitatively, using common machine translation metrics, and qualitatively through a large-scale user study. Finally, we offer learned lessons and a discussion of the potential shown by multimodal models to enhance future techniques for automated software documentation.
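As a side note on the quantitative evaluation mentioned above, machine translation metrics such as BLEU compare predicted descriptions against human-written references; a small sketch with NLTK (the sentences below are made up).

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# One hypothetical model output and its human-written reference, tokenized.
references = [[["the", "screen", "shows", "a", "login", "form"]]]
hypotheses = [["a", "login", "screen", "with", "two", "text", "fields"]]

smooth = SmoothingFunction().method1   # avoids zero scores on short sentences
print(f"BLEU: {corpus_bleu(references, hypotheses, smoothing_function=smooth):.3f}")
```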
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit to mitigate the sample inefficiency problem of visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, making predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pretraining in visuomotor driving. We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios demonstrate the superiority of our proposed approach, with improvements ranging from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
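A simplified sketch of the photometric objective described for the second stage: warp the source frame into the target view using predicted depth and relative pose, then penalize the reconstruction error. PPGeo's actual heads and loss are more involved; the tensor shapes and plain L1 penalty here are assumptions.

```python
import torch
import torch.nn.functional as F

def photometric_loss(target, source, depth, pose, K):
    """target, source: (B,3,H,W) frames; depth: (B,1,H,W) for the target;
    pose: (B,4,4) relative pose target->source; K: (B,3,3) intrinsics."""
    B, _, H, W = target.shape
    dev = target.device

    # Homogeneous pixel grid of the target frame: (B, 3, H*W)
    ys, xs = torch.meshgrid(
        torch.arange(H, device=dev, dtype=torch.float32),
        torch.arange(W, device=dev, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).view(3, -1).unsqueeze(0).expand(B, -1, -1)

    # Back-project with the predicted depth, move into the source frame, re-project.
    cam = (torch.linalg.inv(K) @ pix) * depth.view(B, 1, -1)
    cam = torch.cat([cam, torch.ones(B, 1, H * W, device=dev)], 1)
    proj = K @ (pose @ cam)[:, :3]
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)

    # Sample the source image at the re-projected coordinates and compare.
    grid = torch.stack(
        [2 * uv[:, 0] / (W - 1) - 1, 2 * uv[:, 1] / (H - 1) - 1], dim=-1
    ).view(B, H, W, 2)
    warped = F.grid_sample(source, grid, padding_mode="border", align_corners=True)
    return (target - warped).abs().mean()
```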
This paper illustrates the technology behind user next-intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG, an offline concept knowledge graph in the Life-Service domain that models users' historical behaviors, the rich content users interact with, and the relations between them, to explicitly characterize user intent. We further introduce a Transformer-based model that integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of downstream tasks while retaining explainability.
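As a generic illustration of next-intent prediction (not AlipayKG's actual architecture, which additionally injects expert rules from the concept knowledge graph), a Transformer encoder over a user's historical intent IDs could look like the following sketch.

```python
import torch
import torch.nn as nn

class NextIntentModel(nn.Module):
    """Embed a sequence of historical behavior/intent IDs, encode it with a
    Transformer, and classify the next intent from the last position."""

    def __init__(self, n_intents, d_model=64, n_heads=4, n_layers=2, max_len=50):
        super().__init__()
        self.embed = nn.Embedding(n_intents, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_intents)

    def forward(self, intent_ids):                       # (batch, seq_len)
        pos = torch.arange(intent_ids.size(1), device=intent_ids.device)
        h = self.encoder(self.embed(intent_ids) + self.pos(pos))
        return self.head(h[:, -1])                       # logits over next intent

model = NextIntentModel(n_intents=1000)
next_intent_logits = model(torch.randint(0, 1000, (8, 20)))
```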
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .